Recommending points of interest (POIs) is a difficult problem that requires extracting precise location information from location-based social media platforms. Another challenging and critical problem for such location-aware recommendation systems is modeling users' preferences from their historical behavior. We propose a location-aware recommendation system based on Bidirectional Encoder Representations from Transformers (BERT) that provides users with location-based recommendations. The proposed model incorporates both location data and user preferences. Compared with predicting the next item (location) at each position of a sequence, our model yields more relevant results for the user. Extensive experiments on benchmark datasets show that our model consistently outperforms various state-of-the-art sequential models.
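The Cloze-style training objective used by BERT-style sequential recommenders can be sketched as follows. This is a minimal illustration, not the paper's code: the POI names, mask rate, and the `cloze_instances` helper are all assumptions.

```python
import random

MASK = "[MASK]"

def cloze_instances(checkins, mask_prob=0.2, seed=0):
    """BERT4Rec-style Cloze task: randomly mask POIs in a check-in
    sequence so the model learns to recover each masked POI from
    context on both sides, rather than only left-to-right."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for poi in checkins:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(poi)      # the model is scored on this position
        else:
            inputs.append(poi)
            labels.append(None)     # not scored
    # At inference time, masking only the last position turns the
    # Cloze objective into next-POI recommendation.
    inputs[-1], labels[-1] = MASK, checkins[-1]
    return inputs, labels
```

Masking the final position is what lets a bidirectionally trained model still be used for next-location prediction.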
translated by Google Translate
Recommender systems, information retrieval, and other information access systems pose unique challenges for examining and applying concepts of fairness and bias mitigation in unstructured text. This paper introduces Dbias, a Python package for ensuring fairness in news articles. Dbias is a trained machine learning (ML) pipeline that takes a piece of text (e.g., a paragraph or news story) and detects whether it is biased. It then identifies the biased words in the text, masks them, and recommends a set of sentences with new words that are bias-free or at least less biased. We incorporate elements of data science best practice to make the pipeline reproducible and usable. Our experiments show that the pipeline mitigates bias effectively and outperforms common neural network architectures at ensuring the fairness of news articles.
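The three stages of the pipeline (detect, mask, recommend) can be sketched with a toy lexicon. This is not the Dbias API: the real package uses trained transformer models, and the `BIASED` table and helper names below are made-up stand-ins.

```python
# Toy stand-in lexicon mapping biased terms to neutral substitutes.
BIASED = {"radical": "firm", "hysterical": "upset"}

def detect(text):
    """Stage 1: flag text that contains any known biased term."""
    return any(w.strip(".,").lower() in BIASED for w in text.split())

def mask(text):
    """Stage 2: hide the biased words behind a mask token."""
    return " ".join("[MASK]" if w.strip(".,").lower() in BIASED else w
                    for w in text.split())

def debias(text):
    """Stage 3: recommend a sentence with neutral replacements."""
    return " ".join(BIASED.get(w.strip(".,").lower(), w)
                    for w in text.split())
```

In the actual package the detection and replacement steps are learned rather than lexicon-based, but the stage boundaries are the same.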
The proliferation of GPS-enabled mobile devices and the widespread use of location-based services have led to the generation of massive amounts of geo-tagged data. Data analysis now has access to more sources, including reviews, news, and images, which also raises questions about the reliability of point-of-interest (POI) data sources. While previous research has tried to detect fake POI data through various security mechanisms, the present work attempts to capture fake POI data in a simpler way. The proposed work focuses on supervised learning methods and their ability to find hidden patterns in location-based data. Ground-truth labels are obtained from real data, and fake data are generated using an API, giving us a dataset with both real and fake labels on location data. The objective is to predict the truthfulness of a POI using a multi-layer perceptron (MLP). In the proposed work, an MLP based on data classification techniques is used to classify location data accurately. The method is compared with traditional classifiers as well as robust, recent deep neural approaches. The results show that the proposed method outperforms the baselines.
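A minimal sketch of the supervised setup, assuming real and fake POIs are represented as small feature vectors: a tiny one-hidden-layer MLP trained with SGD on log loss. The features, labels (1 = fake), and hyperparameters are illustrative, not the paper's configuration.

```python
import math, random

def train_mlp(X, y, hidden=4, lr=0.5, epochs=500, seed=0):
    """Train a one-hidden-layer MLP (tanh hidden units, sigmoid output)
    and return a predict function giving P(POI is fake)."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        z = sum(w * hi for w, hi in zip(W2, h)) + b2
        return h, 1.0 / (1.0 + math.exp(-z))

    for _ in range(epochs):
        for x, t in zip(X, y):
            h, o = forward(x)
            d_o = o - t                         # log-loss gradient w.r.t. z
            for j in range(hidden):
                d_h = d_o * W2[j] * (1.0 - h[j] * h[j])
                W2[j] -= lr * d_o * h[j]
                b1[j] -= lr * d_h
                for i in range(n_in):
                    W1[j][i] -= lr * d_h * x[i]
            b2 -= lr * d_o

    return lambda x: forward(x)[1]
```

In practice one would reach for a library implementation (e.g., scikit-learn's `MLPClassifier`); the point here is only the real-vs-fake classification framing.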
The quantification of tumor-infiltrating lymphocytes (TILs) has been shown to be an independent predictor of prognosis in breast cancer patients. Typically, pathologists estimate the proportion of the stromal region that contains TILs to obtain a TILs score. The Tumor InfiltratinG lymphocytes in breast cancER (TiGER) challenge aims to assess the prognostic significance of computer-generated TILs scores for predicting survival as part of a Cox proportional hazards model. For this challenge, as the TIAger team, we developed an algorithm that first segments tumor versus stroma, then uses the tumor bulk region for TILs detection. Finally, we use these outputs to generate a TILs score for each case. On preliminary testing, our approach achieved a tumor-stroma weighted Dice score of 0.791 and a FROC score of 0.572 for lymphocyte detection. For predicting survival, our model achieved a C-index of 0.719. These results earned first place on the preliminary testing leaderboard of the TiGER challenge.
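The final step, turning the segmentation and detection outputs into a TILs score, can be sketched as the fraction of stromal area occupied by detected lymphocytes. The 0/1 mask format and the `tils_score` helper are illustrative assumptions, not the team's exact scoring code.

```python
def tils_score(stroma_mask, til_mask):
    """TILs score as pathologists define it: the fraction of the
    stromal region occupied by tumor-infiltrating lymphocytes.
    Both masks are 2-D lists of 0/1 over the same pixel grid; in the
    challenge pipeline they would come from the tumor/stroma
    segmentation and the lymphocyte detector respectively."""
    stromal = til_in_stroma = 0
    for s_row, t_row in zip(stroma_mask, til_mask):
        for s, t in zip(s_row, t_row):
            stromal += s
            til_in_stroma += s and t   # count TIL pixels inside stroma only
    return til_in_stroma / stromal if stromal else 0.0
```

Restricting the count to stromal pixels mirrors the pathologist's convention of scoring TILs within stroma, not within tumor.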
Over the past two decades, significant effort has gone into developing object detection methods for remote sensing (RS) images. In most cases, datasets for small-object detection in remote sensing images are inadequate. Many researchers have therefore used scene classification datasets for object detection, which has its limitations; for example, large objects outnumber small objects within the object categories. The resulting datasets lack diversity, which further hurts the detection performance of small-object detectors on RS images. This paper reviews current datasets and (deep-learning-based) object detection methods for remote sensing images. We also propose a large-scale, publicly available benchmark, the Remote Sensing Super-resolution Object Detection (RSSOD) dataset. The RSSOD dataset consists of 1,759 hand-annotated images containing 22,091 instances of very-high-resolution (VHR) imagery with a spatial resolution of about 0.05 m. There are five classes, each with a different label frequency. Image patches are extracted from satellite images and include real image distortions such as tangential scale distortion and skew distortion. We also propose a novel multi-class cyclic super-resolution generative adversarial network with residual feature aggregation (MCGR), together with an auxiliary YOLOv5 detector, for object detection based on image super-resolution, and compare it with existing state-of-the-art methods for image super-resolution (SR). The proposed MCGR achieves state-of-the-art image SR performance, surpassing the current state-of-the-art NLSN method. MCGR achieves the best object detection mAPs of 0.758, 0.881, 0.841, and 0.983, respectively, exceeding the performance of the state-of-the-art object detectors YOLOv5, EfficientDet, Faster R-CNN, SSD, and RetinaNet.
Oral epithelial dysplasia (OED) is a premalignant histopathological diagnosis given to lesions of the oral cavity. Predicting OED grade, and whether a case will transform to malignancy, is critical for early detection and appropriate treatment. OED typically begins in the lower third of the epithelium and progresses upwards with increasing severity of grade; we therefore propose that segmenting the epithelial layers, in addition to individual nuclei, can enable researchers to evaluate important layer-specific morphological features for grade/malignancy prediction. We present HoVer-Net+, a deep learning framework to simultaneously segment (and classify) nuclei and (intra-)epithelial layers in H&E-stained slides of oral tissue. The proposed architecture consists of an encoder branch and four decoder branches for simultaneous instance segmentation of nuclei and semantic segmentation of the epithelial layers. We show that the proposed model achieves state-of-the-art (SOTA) performance in both tasks, with no additional cost compared with previous SOTA methods for each task. To the best of our knowledge, ours is the first method for simultaneous nuclear instance segmentation and semantic tissue segmentation, with potential for use in computational pathology for other similar simultaneous tasks and in research on malignancy prediction.
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. Significant progress has been made in super-resolution over the past two decades, especially through the use of deep learning methods. This survey presents a detailed review of recent advances in single-image super-resolution from a deep learning perspective, while also covering the initial classical methods for image super-resolution. The survey classifies image SR methods into four categories: classical methods, learning-based methods, unsupervised learning methods, and domain-specific SR methods. We also introduce the problem settings of SR to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep-learning-based approaches are evaluated using the reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and open problems for researchers to address.
Wind power forecasting helps with planning for power systems by contributing to a higher level of certainty in decision-making. Due to the randomness inherent to meteorological events (e.g., wind speeds), making highly accurate long-term predictions for wind power can be extremely difficult. One approach to remedy this challenge is to utilize weather information from multiple points across a geographical grid to obtain a holistic view of the wind patterns, along with temporal information from the previous power outputs of the wind farms. Our proposed CNN-RNN architecture combines convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract spatial and temporal information from multi-dimensional input data to make day-ahead predictions. In this regard, our method incorporates an ultra-wide learning view, combining data from multiple numerical weather prediction models, wind farms, and geographical locations. Additionally, we experiment with global forecasting approaches to understand the impact of training the same model over the datasets obtained from multiple different wind farms, and we employ a method where spatial information extracted from convolutional layers is passed to a tree ensemble (e.g., Light Gradient Boosting Machine (LGBM)) instead of fully connected layers. The results show that our proposed CNN-RNN architecture outperforms other models such as LGBM, Extra Trees regressor, and linear regression when trained globally, but fails to replicate such performance when trained individually on each farm. We also observe that passing the spatial information from the CNN to LGBM improves its performance, providing further evidence of the CNN's spatial feature extraction capabilities.
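The day-ahead framing, pairing a spatial weather grid (for the CNN branch) with a window of past power readings (for the RNN branch), can be sketched as follows. The data layout, window lengths, and `day_ahead_samples` helper are illustrative assumptions, not the paper's exact preprocessing.

```python
def day_ahead_samples(weather, power, lookback=3, horizon=2):
    """Build (spatial_input, temporal_input, target) triples.

    weather: per-timestep 2-D grids of NWP values over geographical
             grid points (fed to the CNN branch);
    power:   per-timestep past farm outputs, time-aligned with
             `weather` (fed to the RNN branch).
    The target is the farm output `horizon` steps ahead."""
    samples = []
    for t in range(lookback, len(power) - horizon):
        spatial = weather[t]                 # current weather grid
        temporal = power[t - lookback:t]     # recent power window
        target = power[t + horizon]          # day-ahead value
        samples.append((spatial, temporal, target))
    return samples
```

The same sample construction supports both the per-farm and the global training regimes; for global training, one would simply concatenate the samples from all farms.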
Ensemble learning combines results from multiple machine learning models in order to provide a better and optimised predictive model with reduced bias and variance and improved predictions. However, in federated learning it is not feasible to apply centralised ensemble learning directly due to privacy concerns. Hence, a mechanism is required to combine the results of local models to produce a global model. Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications. This is because, in such methods, the predictions of some peers are disregarded, so a majority of peers can win without other peers' decisions even being considered. Additionally, the confidence score of each peer's result is not normally taken into account, although it is an important feature to consider for ensemble learning. Moreover, the problem of a tie event is often left unaddressed by methods such as BFT. To fill these research gaps, we propose PoSw (Proof of Swarm), a novel distributed consensus algorithm for ensemble learning in a federated setting, inspired by particle swarm based algorithms for solving optimisation problems. The proposed algorithm is theoretically proven to always converge in a relatively small number of steps and has mechanisms to resolve tie events while trying to achieve sub-optimal solutions. We experimentally validated the performance of the proposed algorithm using ECG classification as an example application in healthcare, showing that the ensemble learning model outperformed all local models and even the FL-based global model. To the best of our knowledge, the proposed algorithm is the first attempt to reach consensus over the output results of distributed models trained using federated learning.
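The gap the paper identifies in plain majority voting, that peers' confidence scores are ignored and ties are unhandled, can be illustrated with a toy confidence-weighted vote. This sketch is not the PoSw protocol itself; the `weighted_vote` helper and its deterministic tie-break are illustrative assumptions.

```python
def weighted_vote(predictions):
    """predictions: list of (label, confidence) pairs, one per peer.

    Unlike simple majority voting (as in BFT-style consensus), every
    peer's confidence contributes to the total for its label, and an
    exact tie on total weight is broken deterministically by taking
    the lexicographically smallest label."""
    totals = {}
    for label, conf in predictions:
        totals[label] = totals.get(label, 0.0) + conf
    best = max(totals.values())
    winners = sorted(l for l, w in totals.items() if abs(w - best) < 1e-12)
    return winners[0], totals
```

Note how a single confident peer can outweigh a less confident majority, which a head-count vote cannot express.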
Vehicle-to-Everything (V2X) communication has been proposed as a potential solution to improve the robustness and safety of autonomous vehicles by improving coordination and removing the barrier of non-line-of-sight sensing. Cooperative Vehicle Safety (CVS) applications are tightly dependent on the reliability of the underlying data system, which can suffer from loss of information due to the inherent issues of its different components, such as sensor failures or the poor performance of V2X technologies under dense communication channel load. In particular, information loss affects the target classification module and, subsequently, the safety application performance. To enable reliable and robust CVS systems that mitigate the effect of information loss, we propose a Context-Aware Target Classification (CA-TC) module coupled with a hybrid learning-based predictive modeling technique for CVS systems. The CA-TC consists of two modules: a Context-Aware Map (CAM) and a Hybrid Gaussian Process (HGP) prediction system. Consequently, the vehicle safety applications use the information from the CA-TC, making them more robust and reliable. The CAM leverages vehicles' path history, road geometry, tracking, and prediction; the HGP provides accurate vehicle trajectory predictions to compensate for data loss (due to communication congestion) or sensor measurement inaccuracies. Based on offline real-world data, we learn a finite bank of driver models that represent the joint dynamics of the vehicle and the drivers' behavior. We combine offline training and online model updates with on-the-fly forecasting to account for new possible driver behaviors. Finally, our framework is validated using simulation and realistic driving scenarios to confirm its potential for enhancing the robustness and reliability of CVS systems.
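The idea of selecting, online, the driver model from a finite bank that best explains recent observations can be sketched as follows. The model bank, the squared-residual criterion, and the `pick_driver_model` helper are hypothetical simplifications of the paper's HGP-based approach.

```python
def pick_driver_model(bank, recent_obs):
    """bank: dict mapping model name -> predictor f(t) for the observed
    quantity at relative time step t; recent_obs: the last few actual
    measurements.  Return the name of the model with the smallest
    squared prediction error over the recent window, i.e. the driver
    behavior that currently best explains the data."""
    def err(f):
        return sum((f(t) - y) ** 2 for t, y in enumerate(recent_obs))
    return min(bank, key=lambda name: err(bank[name]))
```

Re-running this selection as new measurements arrive is one simple way to combine an offline-trained bank with online updates; the forecast used to bridge communication gaps would then come from the currently selected model.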